Policy Recommendation


Exploring Equity of Climate Policies using Multi-Agent Multi-Objective Reinforcement Learning

Biswas, Palok, Osika, Zuzanna, Tamassia, Isidoro, Whorra, Adit, Zatarain-Salazar, Jazmin, Kwakkel, Jan, Oliehoek, Frans A., Murukannaiah, Pradeep K.

arXiv.org Artificial Intelligence

Addressing climate change requires coordinated policy efforts of nations worldwide. These efforts are informed by scientific reports, which rely in part on Integrated Assessment Models (IAMs), prominent tools used to assess the economic impacts of climate policies. However, traditional IAMs optimize policies based on a single objective, limiting their ability to capture the trade-offs among economic growth, temperature goals, and climate justice. As a result, policy recommendations have been criticized for perpetuating inequalities, fueling disagreements during policy negotiations. We introduce Justice, the first framework to integrate an IAM with Multi-Objective Multi-Agent Reinforcement Learning (MOMARL). By incorporating multiple objectives, Justice generates policy recommendations that shed light on equity while balancing climate and economic goals. Further, the use of multiple agents provides a realistic representation of the interactions among the diverse policy actors. We identify equitable Pareto-optimal policies using our framework, which facilitates deliberative decision-making by presenting policymakers with the inherent trade-offs in climate and economic policy.
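The Pareto-optimal policies at the heart of such multi-objective frameworks can be illustrated with a minimal sketch. The objective names and scores below are hypothetical stand-ins, not outputs of the Justice framework; the sketch only shows what "Pareto-optimal" means here: no other policy is at least as good on every objective and strictly better on one.

```python
# Minimal illustration of Pareto filtering over candidate policies.
# Objective vectors are made up for illustration; higher is better on every axis.

def dominates(a, b):
    """True if policy `a` is at least as good as `b` on every
    objective and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(policies):
    """Keep only the policies not dominated by any other candidate."""
    return [p for p in policies
            if not any(dominates(q, p) for q in policies if q is not p)]

# Hypothetical (growth, temperature-goal, equity) scores for four policies.
candidates = [(0.9, 0.2, 0.1), (0.6, 0.6, 0.5), (0.5, 0.7, 0.6), (0.4, 0.5, 0.4)]
front = pareto_front(candidates)  # the last candidate is dominated by the third
```

Presenting policymakers with the whole front, rather than a single "optimal" policy, is what makes the trade-offs explicit.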


Measuring Political Preferences in AI Systems: An Integrative Approach

Rozado, David

arXiv.org Artificial Intelligence

Political biases in Large Language Model (LLM)-based artificial intelligence (AI) systems, such as OpenAI's ChatGPT or Google's Gemini, have been previously reported. While several prior studies have attempted to quantify these biases using political orientation tests, such approaches are limited by the tests' potential calibration biases and by constrained response formats that do not reflect real-world human-AI interactions. This study employs a multi-method approach to assess political bias in leading AI systems, integrating four complementary methodologies: (1) linguistic comparison of AI-generated text with the language used by Republican and Democratic U.S. Congress members, (2) analysis of political viewpoints embedded in AI-generated policy recommendations, (3) sentiment analysis of AI-generated text toward politically affiliated public figures, and (4) standardized political orientation testing. Results indicate a consistent left-leaning bias across most contemporary AI systems, though with varying degrees of intensity. However, this bias is not an inherent feature of LLMs; prior research demonstrates that fine-tuning with politically skewed data can realign these models across the ideological spectrum. The presence of systematic political bias in AI systems poses risks, including reduced viewpoint diversity, increased societal polarization, and the potential for public mistrust in AI technologies. To mitigate these risks, AI systems should be designed to prioritize factual accuracy while maintaining neutrality on most lawful normative issues. Furthermore, independent monitoring platforms are necessary to ensure transparency, accountability, and responsible AI development.
Introduction: Recent advancements in AI technology, exemplified by Large Language Models (LLMs) like ChatGPT, represent one of the most significant technological breakthroughs in recent decades. The ability of AI systems to understand and generate human-like natural language has unlocked new possibilities for automation, human-computer interaction, content generation, and information retrieval. However, these impressive capabilities have also raised concerns about the potential biases that such systems might harbor [1], [2], [3], [4]. Preliminary evidence has suggested that AI systems exhibit political biases in the textual content they generate [2], [5], [6].


Instructor-Worker Large Language Model System for Policy Recommendation: a Case Study on Air Quality Analysis of the January 2025 Los Angeles Wildfires

Gao, Kyle, Lu, Dening, Li, Liangzhi, Chen, Nan, He, Hongjie, Xu, Linlin, Li, Jonathan

arXiv.org Artificial Intelligence

The Los Angeles wildfires of January 2025 caused more than 250 billion dollars in damage and burned for nearly a month before containment. Building on our previous work, the Digital Twin Building, we adapt its multi-agent large language model framework and cloud-mapping integration to study air quality during the Los Angeles wildfires. Recent advances in large language models have enabled out-of-the-box automated large-scale data analysis. Our system comprises an Instructor agent and Worker agents. Upon receiving a user's instructions, the Instructor agent retrieves data from the cloud platform and produces instruction prompts for the Worker agents. The Worker agents then analyze the data and return summaries. The summaries are fed back to the Instructor agent, which produces the final data analysis. We test this system's capability for data-driven policy recommendation by assessing the Instructor-Worker LLM system's health recommendations based on air quality during the Los Angeles wildfires.
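The Instructor-Worker control flow described above can be sketched in a few lines. Everything here is a hypothetical stand-in for the authors' system: `call_llm` is a placeholder for a real LLM API call, and the prompts and data partitioning are illustrative only.

```python
# Hypothetical sketch of an Instructor-Worker LLM pipeline:
# the Instructor fans instructions out to Workers over data chunks,
# then synthesizes their summaries into a final analysis.

def call_llm(prompt):
    # Placeholder for a real LLM API call.
    return f"summary of: {prompt[:40]}"

def instructor_worker(user_request, data_chunks):
    # 1. Instructor turns the user request into one prompt per data chunk.
    worker_prompts = [
        f"Analyze this air-quality data for: {user_request}\n{chunk}"
        for chunk in data_chunks
    ]
    # 2. Workers analyze their chunks independently and summarize.
    summaries = [call_llm(p) for p in worker_prompts]
    # 3. Instructor synthesizes the summaries into the final answer.
    final_prompt = "Combine these summaries:\n" + "\n".join(summaries)
    return call_llm(final_prompt)
```

The fan-out/fan-in shape is what lets such a system scale past a single model's context window: each Worker sees only its own chunk, and only the compact summaries return to the Instructor.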



Bipartisan panel urges Congress to toss out decades of trade policy they say China has been exploiting

FOX News

President Biden and China's President Xi Jinping met on Saturday, Nov. 16, 2024, at the APEC Summit in Lima, Peru. A federal China commission released its sprawling yearly report to Congress on Tuesday, for the first time recommending lawmakers end China's favored trade status and the provision that allows goods valued under $800 to enter the U.S. duty-free. The U.S.-China Economic and Security Review Commission, established by Congress as a bipartisan entity to investigate and provide policy recommendations on China, is now directly advocating for Congress to end the Permanent Normal Trade Relations (PNTR) China has enjoyed since 2004. The commission will pitch its 83 policy recommendations to lawmakers on Tuesday, along with a report on China's military capabilities, its threats to U.S. allies in the region and how it is exploiting U.S. policy for its own advancement. "For decades we have engaged in whack-a-mole policy working within international organizations and guidelines to address the increasing and ambitious efforts by China to skirt laws or take advantage of trade loopholes," commission chair Robin Cleveland said. "In our hearing on the threats to American consumers this year we heard from administration and expert witnesses who were starkly clear: U.S. agencies do not know if the majority of packages coming from China include a baby toy painted with a toxic chemical--a counterfeit piece of clothing made with slave labor--or a pin head amount of fentanyl which is enough to kill the average citizen."


Finding General Equilibria in Many-Agent Economic Simulations Using Deep Reinforcement Learning

Curry, Michael, Trott, Alexander, Phade, Soham, Bai, Yu, Zheng, Stephan

arXiv.org Artificial Intelligence

Real economies can be seen as a sequential imperfect-information game with many heterogeneous, interacting strategic agents of various types, such as consumers, firms, and governments. Dynamic general equilibrium (DGE) models are common economic tools to model the economic activity, interactions, and outcomes in such systems. However, existing analytical and computational methods struggle to find explicit equilibria when all agents are strategic and interact, while joint learning is unstable and challenging. A key reason, among others, is that the actions of one economic agent may change the reward function of another agent, e.g., a consumer's expendable income changes when firms change prices or governments change taxes. We show that multi-agent deep reinforcement learning (RL) can discover stable solutions that are epsilon-Nash equilibria for a meta-game over agent types, in economic simulations with many agents, through the use of structured learning curricula and efficient GPU-only simulation and training. Conceptually, our approach is more flexible and does not need unrealistic assumptions, e.g., market clearing, that are commonly used for analytical tractability. Our GPU implementation enables training and analyzing economies with a large number of agents within reasonable time frames, e.g., training completes within a day. We demonstrate our approach in real-business-cycle (RBC) models, a representative family of DGE models, with 100 worker-consumers, 10 firms, and a government that taxes and redistributes. We validate the learned meta-game epsilon-Nash equilibria through approximate best-response analyses, show that RL policies align with economic intuitions, and show that our approach is constructive, e.g., by explicitly learning a spectrum of meta-game epsilon-Nash equilibria in open RBC models.
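The epsilon-Nash check via approximate best response mentioned above can be illustrated on a toy game. The two-player payoff tables below are made up and unrelated to the paper's RBC simulations; the sketch only shows the criterion itself: a strategy profile is an epsilon-Nash equilibrium when no single agent can gain more than epsilon by unilaterally deviating.

```python
# Toy best-response exploitability check for a 2-player, 2-action game.
# Payoff tables are hypothetical (a simple coordination game).
A = [[2.0, 0.0], [0.0, 1.0]]  # row player's payoffs
B = [[2.0, 0.0], [0.0, 1.0]]  # column player's payoffs

def exploitability(profile):
    """Largest gain any one agent gets by unilaterally deviating.
    A profile is an epsilon-Nash equilibrium iff this is <= epsilon."""
    r, c = profile
    gain_row = max(A[d][c] for d in (0, 1)) - A[r][c]  # row deviates, column fixed
    gain_col = max(B[r][d] for d in (0, 1)) - B[r][c]  # column deviates, row fixed
    return max(gain_row, gain_col)

eps_at_equilibrium = exploitability((0, 0))   # both coordinate: no profitable deviation
eps_off_equilibrium = exploitability((0, 1))  # miscoordination: deviation pays
```

In the paper's setting the same test is run against learned RL best responses rather than exact maxima over actions, which is why the resulting equilibria are approximate (epsilon) rather than exact.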


Media Advisory -- MIT researchers: AI policy needed to manage impacts, build more equitable systems

#artificialintelligence

On Thursday, May 6 and Friday, May 7, the AI Policy Forum -- a global effort convened by researchers from MIT -- will present its initial policy recommendations aimed at managing the effects of artificial intelligence and building AI systems that better reflect society's values. Recognizing that there is unlikely to be any singular national AI policy, but rather public policies for the distinct ways in which we encounter AI in our lives, forum leaders will preview their preliminary findings and policy recommendations in three key areas: finance, mobility, and health care. The inaugural AI Policy Forum Symposium, a virtual event hosted by the MIT Schwarzman College of Computing, will bring together AI and public policy leaders, government officials from around the world, regulators, and advocates to investigate some of the pressing questions posed by AI in our economies and societies. The symposium's program will feature remarks from public policymakers helping shape governments' approaches to AI; state and federal regulators on the front lines of these issues; designers of self-driving cars and cancer-diagnosing algorithms; faculty examining the systems used in emerging finance companies and associated concerns; and researchers pushing the boundaries of AI. Media RSVP: Reporters interested in attending can register here.


What Policies Should India Emulate From The US's AI Playbook?

#artificialintelligence

The National Security Commission on Artificial Intelligence (NSCAI) recently published its Final Report for 2021, outlining an integrated national strategy to empower the US in the era of AI-accelerated competition and conflict. NSCAI worked with technologists, national security professionals, business executives and academic leaders to put out the report. According to the report, the US government is a long way from being "AI-ready." Based on the findings, the commission has proposed a set of policy recommendations. The US leads most countries, including India, on almost all AI parameters.


2019 - World Leadership Alliance - Club de Madrid

#artificialintelligence

Around 35 Members of World Leadership Alliance-Club de Madrid (WLA-CdM), all democratic former Heads of State and Government, will join representatives from governments, civil society, academia and tech companies in a discussion to define policy solutions that address the challenges of digital transformation and artificial intelligence (AI) in our societies. Vaira Vike-Freiberga, President of WLA-CdM and former President of Latvia, warns of the big governance challenges that the technological transformation brings along, while also acknowledging its great opportunities. With this in mind, WLA-CdM has partnered with the IE School of Global & Public Affairs in the organization of its 2019 Policy Dialogue, titled 'Digital Transformation and the Future of Democracy: How Can Artificial Intelligence Drive Democratic Governance?' "The rise of AI will change our societies in ways researchers are only beginning to examine, and democratic governments simply cannot afford to lag behind. We must govern the technological game before it governs us, not through censorship or trying to stop innovation, but by acquiring competencies and a better understanding of how AI could work for us", says President Vike-Freiberga. AI is set to bring about a radical transformation that will disrupt economic and social trends, raise new ethical dilemmas and change the existing balance of power between states. At a time of growing inequality and widespread mistrust of institutions, democracies will need to work out AI's rollout in our societies without giving up on their foundational values.


Marietje Schaake to Join Stanford Cyber Policy Center and Institute

#artificialintelligence

The Freeman Spogli Institute for International Studies (FSI) and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) are pleased to announce that Marietje Schaake has been named to international policy roles in each of their organizations. At FSI, Schaake will serve as the first international policy director of the Cyber Policy Center. With a focus on cybersecurity, disinformation, digital democracy and election security, the Cyber Policy Center's research, teaching and policy engagement aims to bring new insights and solutions to national governments, international institutions and industry. Schaake will also be an international policy fellow at Stanford HAI, which seeks to advance artificial intelligence (AI) research, education, policy and practice to improve the human condition. The university-wide institute is committed to working with industry, governments and civil society organizations that share the goal of a better future for humanity through AI.